ISSN: 2074-9007 (Print)
ISSN: 2074-9015 (Online)
DOI: https://doi.org/10.5815/ijitcs
Website: https://www.mecs-press.org/ijitcs
Published By: MECS Press
Frequency: 6 issues per year
Number(s) Available: 138
IJITCS is committed to bridging the theory and practice of information technology and computer science. From innovative ideas to specific algorithms and full system implementations, IJITCS publishes original, peer-reviewed, and high-quality articles in the areas of information technology and computer science. IJITCS is a well-indexed scholarly journal and is indispensable reading and reference for people working at the cutting edge of information technology and computer science applications.
IJITCS has been abstracted or indexed by several world-class databases: Scopus, Google Scholar, Microsoft Academic Search, CrossRef, Baidu Wenku, IndexCopernicus, IET Inspec, EBSCO, VINITI, JournalSeek, ULRICH's Periodicals Directory, WorldCat, Scirus, Academic Journals Database, Stanford University Libraries, Cornell University Library, UniSA Library, CNKI Scholar, J-Gate, ZDB, BASE, OhioLINK, iThenticate, Open Access Articles, Open Science Directory, National Science Library of Chinese Academy of Sciences, The HKU Scholars Hub, etc.
IJITCS Vol. 17, No. 4, Aug. 2025
REGULAR PAPERS
The problems of managing modern complex organizational and manufacturing systems, such as international production corporations, regional economies, sectoral ministries, etc., in conditions of fierce competition are primarily related to the need to consider the activity of organizational and manufacturing objects that make up a multi-level manufacturing system, that is, the ability to efficiently solve the problem of coordinating interests. This problem cannot be solved efficiently without the use of modern scientific achievements and appropriate software. As an example, we can cite the active systems theory pioneered by Prof. V. M. Burkov and his students, which successfully claims to be a constructive implementation of the idea of coordinated planning. This paper proposes new models and methods of coordinated planning of two-level organizational and manufacturing systems. Our models and methods use original compromise criteria and the corresponding constructive algorithms. The original aggregated volume-time models are used as models of organizational and manufacturing objects. We present a well-founded software structure for the proposed methods of coordinated planning. It contains an intelligent interface for using the presented results in solving applied problems.
The rapid proliferation of Unmanned Aerial Vehicles (UAVs) across military, commercial, and civilian domains creates unprecedented security challenges while simultaneously offering significant operational advantages. Current detection and tracking systems face mounting pressure to balance effectiveness with deployment complexity and cost constraints. This paper presents a geospatial detection and movement analysis system for Unmanned Aerial Vehicles that addresses critical security challenges through innovative mathematical and software solutions. The research introduces a methodology for UAV monitoring that minimizes sensor requirements, utilizing a single optical sensor equipped with distance measurement capabilities. The core of this work focuses on developing and evaluating an algorithm for three-dimensional (3D) coordinate determination and trajectory prediction without requiring direct altitude measurement. The proposed approach integrates computer vision detection results with a mathematical model that defines spatial relationships between camera parameters and detected objects. Specifically, the algorithm estimates altitude parameters and calculates probable flight trajectories by analyzing the correlation between apparent size variation and measured distance changes across continuous detections. The system implements a complete analytical pipeline, including continuous detection processing, geospatial coordinate transformation, trajectory vector calculation, and visualization on geographic interfaces. Its modular architecture supports real-time analysis of video streams, representing detected trajectories as vector projections with associated uncertainty metrics. The algorithm's capability to provide reliable trajectory predictions is demonstrated through validation in synthetically generated environments. It offers a cost-effective monitoring solution for small aerial objects across diverse environmental conditions. This research contributes to the development of minimally-instrumented UAV tracking systems applicable in both civilian and defense scenarios.
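As a rough geometric illustration of the relationship this abstract describes between camera parameters, a detected pixel position, and a measured distance, the short Python sketch below converts a detection into a local 3D offset under a simplified pinhole-camera model; the fields of view, camera orientation, and function names are assumptions for demonstration, not the paper's algorithm.

import math

def detection_to_enu(u, v, distance_m,
                     img_w=1920, img_h=1080, hfov_deg=60.0, vfov_deg=36.0,
                     cam_azimuth_deg=0.0, cam_elevation_deg=10.0):
    """Convert a pixel detection (u, v) plus a measured slant range into a local
    East-North-Up offset in metres relative to the camera (illustrative only)."""
    # Angular offset of the pixel from the optical axis, derived from the fields of view.
    az_off = (u - img_w / 2) / img_w * hfov_deg
    el_off = (img_h / 2 - v) / img_h * vfov_deg
    az = math.radians(cam_azimuth_deg + az_off)
    el = math.radians(cam_elevation_deg + el_off)
    east = distance_m * math.cos(el) * math.sin(az)
    north = distance_m * math.cos(el) * math.cos(az)
    up = distance_m * math.sin(el)  # estimated height above the camera, no altimeter required
    return east, north, up

print(detection_to_enu(u=1200, v=400, distance_m=350.0))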
The article is devoted to special methods for distributed databases that accelerate data reconciliation in information systems such as IoT, heterogeneous multi-computer systems, analytical administrative management systems, financial systems, scientific management systems, etc. A method for ensuring data consistency using a transaction clock is proposed, and the results of experimental research on the developed prototype of a financial system are demonstrated. The transaction clock receives transactions from client applications and stores them in appropriate queues. The queues are processed based on transaction priority: the highest-priority queue is processed before the lowest-priority queue. This makes it possible to ensure that important data (such as financial transactions) is processed first. The article justifies the replacement of the Merkle tree with a hashing algorithm and the use of a spectral Bloom filter to improve the Active Anti-Entropy method and accelerate eventual consistency. For its effective use, the filter generation algorithm is modified, which increases the speed of its generation while maintaining a sufficient level of collision resistance.
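As a rough sketch of the priority-queue behaviour described above, the Python fragment below stores submitted transactions in a single priority-ordered structure and always processes the highest-priority one first; all names and the payload format are assumptions for illustration, not the authors' transaction clock.

import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class Transaction:
    priority: int                        # lower value = higher priority
    seq: int                             # arrival order breaks ties
    payload: dict = field(compare=False)

class TransactionClock:
    """Hypothetical priority-based transaction queue (illustrative only)."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, payload, priority):
        heapq.heappush(self._heap, Transaction(priority, next(self._counter), payload))

    def process_next(self):
        if not self._heap:
            return None
        txn = heapq.heappop(self._heap)  # highest-priority (lowest number) transaction first
        # ...apply txn.payload to the replicated store here...
        return txn

clock = TransactionClock()
clock.submit({"op": "transfer", "amount": 100}, priority=0)  # financial transaction: urgent
clock.submit({"op": "log_event"}, priority=2)                # low-priority housekeeping
print(clock.process_next().payload["op"])                    # -> "transfer"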
As software systems continue to grow more complex, evaluating software quality becomes increasingly critical. This study analyzes existing software quality models, including McCall, Boehm, FURPS, and ISO Systems and Software Engineering – Systems and Software Quality Requirements and Evaluation (SQuaRE), with a specific focus on the ISO/IEC 25010:2023 standard. The research aims to assess the completeness of these models and explore interdependencies among key quality attributes relevant to software requirements engineering. The paper identifies key characteristics and associated metrics based on ISO/IEC standards using comparative analysis and a literature review. Findings show that ISO/IEC 25010:2023 provides the most comprehensive structure, with Functional Suitability and Compatibility identified as essential due to their universally recommended metrics. Survey data from 328 practicing analysts in Ukraine and internationally demonstrate a gap between theoretical models and real-world requirements documentation practices, particularly for non-functional requirements. The identified dependencies between quality attributes enable a more integrated and structured approach to identifying and analyzing non-functional requirements in IT projects. The study emphasizes that software quality models must be tailored to project-specific goals and constraints, with attention to trade-offs and stakeholder needs during the requirements specification, prioritization, and validation processes. The findings support the adaptation of quality models to specific project constraints and emphasize the business analyst’s role in tailoring quality criteria for practical use in software development.
The rapid growth of the video game industry and its reliance on digital distribution have created new opportunities for data-driven sales forecasting. Social media platforms serve as influential environments where consumer sentiment, trends, and discussions impact purchasing behaviors. This study examines the potential of using sentiment analysis of social media data to predict video game sales. While traditional sales forecasting models mainly depend on historical sales data and statistical techniques, sentiment analysis offers real-time insights into consumer interest and market demand. This paper reviews existing research on video game sales prediction, the application of sentiment analysis in the gaming industry, and sentiment-based forecasting models in other domains. The findings highlight a significant research gap in applying sentiment analysis to video game sales forecasting, despite its demonstrated efficacy in related fields. The study emphasizes the advantages and challenges of integrating sentiment analysis with traditional forecasting methods and proposes future research directions to enhance predictive accuracy.
Scheduling is an NP-hard problem, and exact algorithms cannot find optimal solutions within a feasible time frame, so heuristic approaches are used to obtain approximate solutions. Efficient Task Scheduling (TS) in Cloud-Fog Computing (CFC) environments is crucial for meeting the diverse resource demands of modern applications. This paper introduces the Sewing Training-Based Optimization (STBO) algorithm, a novel approach to resource-aware task scheduling that effectively balances workloads across cloud and fog resources. STBO categorizes Virtual Machines (VMs) into low, medium, and high resource utilization queues based on their computational power and availability. By dynamically allocating tasks to these queues, STBO minimizes delays and ensures that tasks with stringent deadlines are executed in optimal environments, enhancing overall system performance. The algorithm leverages processing delays, task deadlines, and VM capabilities to assign tasks intelligently, reducing response times and improving resource utilization. Experimental results demonstrate that STBO outperforms existing scheduling algorithms, reducing makespan by 21.6%, improving energy usage by 31%, and increasing throughput by 27.8%, making it well-suited for real-time, resource-intensive applications in CFC systems.
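The queue-based placement idea described above can be pictured with a small Python sketch like the following; the utilization thresholds, data fields, and deadline rule are assumptions chosen for illustration and do not reproduce the published STBO algorithm.

from dataclasses import dataclass

@dataclass
class VM:
    vm_id: str
    mips: float          # computational power (instructions per second, scaled)
    utilization: float   # current load in [0, 1]

@dataclass
class Task:
    task_id: str
    length: float        # instructions to execute
    deadline: float      # seconds

def categorize(vms):
    """Split VMs into low/medium/high utilization queues (thresholds are illustrative)."""
    queues = {"low": [], "medium": [], "high": []}
    for vm in vms:
        tier = "low" if vm.utilization < 0.33 else "medium" if vm.utilization < 0.66 else "high"
        queues[tier].append(vm)
    return queues

def assign(task, queues):
    """Send deadline-critical tasks to lightly loaded VMs first; otherwise balance load."""
    order = ["low", "medium", "high"] if task.deadline < 5.0 else ["medium", "low", "high"]
    for tier in order:
        candidates = [vm for vm in queues[tier] if task.length / vm.mips <= task.deadline]
        if candidates:
            return min(candidates, key=lambda vm: task.length / vm.mips)
    return None  # no VM can meet the deadline

vms = [VM("vm1", 2000, 0.2), VM("vm2", 4000, 0.7), VM("vm3", 1000, 0.5)]
print(assign(Task("t1", length=6000, deadline=4.0), categorize(vms)).vm_id)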
The article presents a modern approach to analysing public opinion based on Ukrainian-language content from Telegram channels. This study presents a hybrid clustering approach that combines HDBSCAN and K-means algorithms to analyse vectorised Ukrainian-language social media posts in order to detect public opinion trends. The methodology relies on a multilingual neural network–based text vectorisation model, which enables effective representation of the semantic content of posts. Experiments conducted on a corpus of 90 Ukrainian-language messages (collected between March and May 2025) allowed for the identification of six principal thematic clusters reflecting key areas of public discourse. Despite the small volume of the corpus (90 messages), the sample is structured and balanced by topic (news, vacancies, gaming), which makes it possible to test the effectiveness of the proposed methodology under conditions of limited data. This approach is appropriate for the analysis of short texts in low-resource languages, where large-scale corpora are not available. A particular advantage of the approach is the use of semantic vector representation and the construction of term co-occurrence networks, which demonstrate a stable topological structure even with small amounts of data. This makes it possible to identify latent topic patterns and coherent clusters that have the potential to scale to broader corpora. The authors acknowledge the limitations associated with sample size, but emphasise the role of this study as a pilot stage in the development of a universal, linguistically adaptive method for analysing public discourse. In the future, the corpus is planned to be expanded to increase the representativeness and accuracy of the conclusions. The paper proposes a hybrid method for automatic thematic cluster analysis of short texts in social media, in particular Telegram. Vectorisation of Ukrainian-language messages is implemented using the transformer model multilingual-e5-large-instruct. A combination of HDBSCAN and K-means algorithms was used to detect clusters. More than 36,000 messages from three Telegram channels (news, games, vacancies) were analysed, and six main thematic clusters were identified. To identify thematic trends, a hybrid clustering approach was used, in which the HDBSCAN algorithm was applied at the first stage to identify dense clusters and mark "noise" points, after which K-means was used to reclassify the residual ("noise") embeddings to the nearest cluster centres.
Such a two-tier strategy combines the advantages of HDBSCAN's flexible detection of free-form clusters with the stable classification of less pronounced groups through K-means. It is especially effective when working with fragmented short texts from social networks. To validate clustering quality, both visualisation tools (PCA, t-SNE, word clouds) and quantitative metrics were used: the Silhouette Score (0.41) and the Davies-Bouldin index (0.78), which indicate moderate coherence and separation of clusters. The high level of "noise" in HDBSCAN (34.2%) was analysed separately; it may be due to the specifics of short texts, model parameters, or the stylistic fragmentation of Telegram messages. The results obtained show the effectiveness of combining modern vectorisation models with flexible clustering methods to identify structured topics in fragmented Ukrainian-language social media content. The proposed approach has the potential to be extended to other sources, types of discourse, and tasks of digital sociology. As a result of processing 90 messages received from three different channels (news, gaming content, and vacancies), six main thematic clusters were identified. The largest share is occupied by clusters related to employment (28.2%) and security-patriotic topics (24.7%). The average level of "noise" after the initial HDBSCAN clustering was 34.2%. Additional analysis revealed that post lengths varied significantly, ranging from short announcements (an average of 10 words) to analytical texts (over 140 words). Visualisations (timelines, PCA, t-SNE, word clouds, term co-occurrence graphs) confirm the thematic coherence of clusters and reveal changes in thematic priorities over time. The proposed system is an effective tool for detecting information trends in an environment of short, fragmented texts and can be used to monitor public sentiment in low-resource languages.
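A minimal Python sketch of the two-stage clustering pipeline described above: embeddings from the multilingual-e5-large-instruct model, a first HDBSCAN pass, and a K-means reassignment of the noise points to the nearest cluster centre. The parameter values and the load_posts() helper are assumptions for illustration, not the paper's exact configuration.

import numpy as np
import hdbscan
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

texts = load_posts()  # hypothetical helper returning a list of Telegram messages
model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
X = model.encode(texts, normalize_embeddings=True)

# Stage 1: HDBSCAN finds dense, free-form clusters and labels outliers as -1 ("noise").
stage1 = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(X)

# Stage 2: K-means, initialised with the HDBSCAN cluster centroids, reassigns the
# noise points to the nearest cluster centre.
cluster_ids = sorted(set(stage1) - {-1})
centroids = np.array([X[stage1 == c].mean(axis=0) for c in cluster_ids])
kmeans = KMeans(n_clusters=len(cluster_ids), init=centroids, n_init=1).fit(X)

final = stage1.copy()
noise = stage1 == -1
final[noise] = np.array(cluster_ids)[kmeans.predict(X[noise])]  # noise -> nearest centre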
One area that has seen rapid growth and differing perspectives from many developers in recent years is document management. The field has advanced to the point where developers have made it simple for anyone to access documents in a matter of seconds. It is impossible to overstate the importance of document management systems as a necessity in the workplace environment of an organization. Interviews, scenario creation using participants' and stakeholders' first-hand accounts, and examination of current procedures and structures were all used to collect data. The development approach followed a software development methodology called the Object-Oriented Hypermedia Design Methodology. With the help of Unified Modeling Language (UML) tools, a web-based electronic document management system (WBEDMS) was created. Its database was created using MySQL, and the system was constructed using web technologies including XAMPP, HTML, and the PHP programming language. The results of the system evaluation showed a successful outcome. After using the system that was created, respondents' satisfaction with it was 96.60%. This shows that the document system was regarded as adequate and excellent enough to achieve or meet the specified requirements when users (secretaries and departmental personnel) used it. Results showed that the system developed yielded an accuracy of 95% and a usability of 99.20%. The report concluded that the proposed electronic document management system would improve user satisfaction, boost productivity, and guarantee time and data efficiency. It follows that well-known document management systems undoubtedly assist in holding and managing a substantial portion of organizations' knowledge assets, which include documents and other associated items.
A sizeable number of women face difficulties during pregnancy, which can eventually lead the fetus towards serious health problems. However, early detection of these risks can save the invaluable lives of both infants and mothers. Cardiotocography (CTG) data, which provides sophisticated information by monitoring the fetal heart rate signal, is used to predict potential risks to fetal wellbeing and to support clinical conclusions. This paper proposes to analyze antepartum CTG data (available on the UCI Machine Learning Repository) and develop an efficient tree-based ensemble learning (EL) classifier model to predict fetal health status. In this study, EL uses the Stacking approach, and a concise overview of this approach is discussed and developed accordingly. The study also applies distinct machine learning algorithmic techniques to the CTG dataset and determines their performances. The Stacking EL technique in this paper involves four tree-based machine learning algorithms, namely the Random Forest classifier, Decision Tree classifier, Extra Trees classifier, and Deep Forest classifier, as base learners. The CTG dataset contains 21 features, but only the 10 most important features are selected from the dataset with the Chi-square method for this experiment, and then the features are normalized with Min-Max scaling. Following that, Grid Search is applied for tuning the hyperparameters of the base algorithms. Subsequently, 10-fold cross-validation is performed to select the meta learner of the EL classifier model. A comparative model assessment is then made between the individual base learning algorithms and the EL classifier model; the findings depict the EL classifier's superiority in fetal health risk prediction, achieving an accuracy of about 96.05%. This study concludes that the Stacking EL approach can be a substantial paradigm in machine learning studies to improve model accuracy and reduce the error rate.
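A hedged sketch of the described pipeline using scikit-learn is shown below; the hyperparameters and the logistic-regression meta-learner are assumptions, and the Deep Forest base learner is omitted because it is not part of scikit-learn.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, StackingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("dt", DecisionTreeClassifier(random_state=42)),
    ("et", ExtraTreesClassifier(n_estimators=200, random_state=42)),
]

model = Pipeline([
    ("select", SelectKBest(chi2, k=10)),       # keep the 10 most informative features
    ("scale", MinMaxScaler()),                 # Min-Max normalisation
    ("stack", StackingClassifier(
        estimators=base_learners,
        final_estimator=LogisticRegression(max_iter=1000),  # assumed meta-learner
        cv=10,                                 # 10-fold CV to build the meta-features
    )),
])
# model.fit(X_train, y_train); model.score(X_test, y_test)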
Artificial Neural Networks are a branch of Artificial Intelligence and have been accepted as a new computing technology in computer science. This paper reviews the field of Artificial Intelligence, focusing on recent applications that use Artificial Neural Networks (ANNs) and Artificial Intelligence (AI). It also considers the integration of neural networks with other computing methods, such as fuzzy logic, to enhance the ability to interpret data. Artificial Neural Networks are considered a major soft-computing technology and have been extensively studied and applied during the last two decades. The most common applications where neural networks are widely used for problem solving are pattern recognition, data analysis, control, and clustering. Artificial Neural Networks have abundant features, including high processing speeds and the ability to learn the solution to a problem from a set of examples. The main aim of this paper is to explore recent applications of Neural Networks and Artificial Intelligence, provide an overview of the field, show where AI and ANNs are used, and discuss the critical role AI and ANNs play in different areas.
The numerical value of k in a k-fold cross-validation training technique for machine learning predictive models is an essential element that impacts the model’s performance. A good choice of k results in better accuracy, while a poorly chosen value of k might affect the model’s performance. In the literature, the most commonly used values of k are five (5) or ten (10), as these two values are believed to give test error rate estimates that suffer neither from extremely high bias nor from very high variance. However, there is no formal rule. To the best of our knowledge, few experimental studies have attempted to investigate the effect of diverse k values in training different machine learning models. This paper empirically analyses the prevalence and effect of distinct k values (3, 5, 7, 10, 15 and 20) on the validation performance of four well-known machine learning algorithms (Gradient Boosting Machine (GBM), Logistic Regression (LR), Decision Tree (DT) and K-Nearest Neighbours (KNN)). It was observed that the value of k and model validation performance differ from one machine-learning algorithm to another for the same classification task. However, our empirical results suggest that k = 7 offers a slight increase in validation accuracy and area under the curve with less computational complexity than k = 10 across most of the algorithms. We discuss the study outcomes in detail and outline some guidelines for beginners in the machine learning field for selecting the best k value and machine learning algorithm for a given task.
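The kind of comparison described above can be reproduced in miniature with scikit-learn; the dataset and the two models below are illustrative stand-ins rather than the study's exact setup.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "LR": LogisticRegression(max_iter=5000),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    for k in (3, 5, 7, 10, 15, 20):
        scores = cross_val_score(model, X, y, cv=k, scoring="accuracy")
        print(f"{name}  k={k:2d}  mean accuracy={scores.mean():.3f}")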
The Marksheet Generator is a flexible tool for generating students' progress mark sheets. The system is mainly based on database technology and the credit-based grading system (CBGS). The system is targeted at small enterprises, schools, colleges, and universities. It can produce sophisticated, ready-to-use mark sheets that can be created and printed immediately. The development of the marksheet and gadget sheet focuses on describing tables with columns/rows and sub-columns/sub-rows, rules for selecting and summarizing data for a report, a particular table or column/row, and formatting the report in the destination document. The adjustable data interface supports popular data sources (SQL Server) and report destinations (PDF files). The marksheet generation system can be used in universities to automate the distribution of digitally verifiable mark sheets of students. The system accesses the students’ exam information from the university database and generates the gadget sheet; the gadget sheet keeps track of student information in a properly listed manner. The project aims at developing a marksheet generation system which can be used in universities to automate the distribution of digitally verifiable student result mark sheets. The system accesses the students’ results information from the institute student database and generates the mark sheets in Portable Document Format, which is tamper-proof and provides the authenticity of the document. The authenticity of the document can also be verified easily.
One of the main causes of mortality among people is traffic accidents. Traffic accidents worldwide have risen to become the third-ranked expected cause of death in 2020. In Saudi Arabia, there are more than 460,000 car accidents every year. The number of car accidents in Saudi Arabia is rising, especially during busy periods such as Ramadan and the Hajj season. The Saudi government is making the required efforts to lower the nation's car accident rate. This paper suggests a business process improvement for car accident reports handled by Najm in accordance with the Saudi Vision 2030. Given the success of drones in many fields (e.g., entertainment, monitoring, and photography), the paper proposes using drones to respond to accident reports, which will help to expedite the process and minimize turnaround time. In addition, drones provide quick accident response and record scenes with accurate results. The Business Process Management (BPM) methodology is followed in this proposal. The model was validated by comparing before-and-after simulation results, which show a significant impact on performance of about 40% in turnaround time. Therefore, using drones can enhance the process of accident response with Najm in Saudi Arabia.
The healthcare system is a knowledge-driven industry which consists of vast and growing volumes of narrative information obtained from discharge summaries/reports, physicians' case notes, and pathologists' as well as radiologists' reports. This information is usually stored in unstructured and non-standardized formats in electronic healthcare systems, which makes it difficult for the systems to understand the information content of the narratives. Thus, access to valuable and meaningful healthcare information for decision making is a challenge. Nevertheless, Natural Language Processing (NLP) techniques have been used to structure narrative information in healthcare. NLP techniques have the capability to capture unstructured healthcare information, analyze its grammatical structure, determine the meaning of the information, and translate it so that it can be easily understood by electronic healthcare systems. Consequently, NLP techniques reduce cost as well as improve the quality of healthcare. It is therefore against this background that this paper reviews the NLP techniques used in healthcare, their applications, as well as their limitations.
Wildfires are increasingly destructive natural disasters, annually consuming millions of acres of forests and vegetation globally. The complex interactions among fuels, topography, and meteorological factors, including temperature, precipitation, humidity, and wind, govern wildfire ignition and spread. This research presents a framework that integrates satellite remote sensing and numerical weather prediction model data to refine estimations of final wildfire sizes. A key strength of our approach is the use of comprehensive geospatial datasets from the IBM PAIRS platform, which provides a robust foundation for our predictions. We implement machine learning techniques through the AutoGluon automated machine learning toolkit to determine the optimal model for burned area prediction. AutoGluon automates the process of feature engineering, model selection, and hyperparameter tuning, evaluating a diverse range of algorithms, including neural networks, gradient boosting, and ensemble methods, to identify the most effective predictor for wildfire area estimation. The system features an intuitive interface developed in Gradio, which allows the incorporation of key input parameters, such as vegetation indices and weather variables, to customize wildfire projections. Interactive Plotly visualizations categorize the predicted fire severity levels across regions. This study demonstrates the value of synergizing Earth observations from spaceborne instruments and forecast data from numerical models to strengthen real-time wildfire monitoring and postfire impact assessment capabilities for improved disaster management. We optimize an ensemble model by comparing various algorithms to minimize the root mean squared error between the predicted and actual burned areas, achieving improved predictive performance over any individual model. The final metric reveals that our optimized WeightedEnsemble model achieved a root mean squared error (RMSE) of 1.564 km² on the test data, indicating an average deviation of approximately 1.2 km² in the predictions.
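A hedged sketch of an AutoGluon regression setup of the kind described above; the CSV file names and the label column are assumptions for illustration, not the study's actual data.

from autogluon.tabular import TabularDataset, TabularPredictor

train = TabularDataset("wildfire_train.csv")  # assumed file: weather + vegetation features, burned area
test = TabularDataset("wildfire_test.csv")    # assumed held-out split

predictor = TabularPredictor(
    label="burned_area_km2",                  # assumed target column name
    eval_metric="root_mean_squared_error",
).fit(train, presets="best_quality")          # AutoGluon tries GBMs, neural nets, and ensembles

print(predictor.leaderboard(test))            # the WeightedEnsemble model typically ranks first
predictions = predictor.predict(test)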
Universities across the globe have increasingly adopted Enterprise Resource Planning (ERP) systems, software that provides integrated management of processes and transactions in real time. These systems contain a great deal of information and hence require secure authentication. Authentication in this case refers to the process of verifying an entity's or device's identity to allow it access to specific resources upon request. However, there have been security and privacy concerns around ERP systems, where only the traditional authentication method of a username and password is commonly used. A password-based authentication approach has weaknesses that can be easily compromised. Cyber-attacks aimed at accessing these ERP systems have become common in institutions of higher learning and cannot be underestimated as they evolve with emerging technologies. Some universities worldwide have been victims of cyber-attacks which targeted authentication vulnerabilities, resulting in damage to the institutions' reputations and credibility. Thus, this research aimed at establishing the authentication methods used for ERPs in Kenyan universities and their vulnerabilities, and at proposing a solution to improve ERP system authentication. The study aimed at developing and validating a multi-factor authentication prototype to improve ERP system security. Multi-factor authentication, which combines several authentication factors such as something the user has, knows, or is, is a state-of-the-art technology that is being adopted to strengthen systems' authentication security. This research used an exploratory sequential design that involved a survey of chartered Kenyan universities, where questionnaires were used to collect data that was later analyzed using descriptive and inferential statistics. Stratified, random, and purposive sampling techniques were used to establish the sample size and the target group. The dependent variable for the study was limited to security rating with respect to the realization of confidentiality, integrity, availability, and usability, while the independent variables were limited to adequacy of security, authentication mechanisms, infrastructure, information security policies, vulnerabilities, and user training. Correlation and regression analysis established that vulnerabilities, information security policies, and user training have a higher impact on system security. These three variables hence acted as the basis for the proposed multi-factor authentication framework for improving ERP system security.
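As a small illustration of layering a second authentication factor on top of a password check, in the spirit of the multi-factor approach discussed above (and not the framework proposed in the study), one might combine a salted password hash with a time-based one-time password via the pyotp library; all values below are placeholders.

import hashlib
import secrets
import pyotp

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

# Enrolment: store a salt, the password hash, and a per-user TOTP secret.
salt = secrets.token_bytes(16)
stored_hash = hash_password("correct horse battery staple", salt)
totp_secret = pyotp.random_base32()            # shown to the user as a QR code in practice

def authenticate(password: str, otp_code: str) -> bool:
    factor_knowledge = secrets.compare_digest(hash_password(password, salt), stored_hash)
    factor_possession = pyotp.TOTP(totp_secret).verify(otp_code)   # code from the user's device
    return factor_knowledge and factor_possession

# Example: generate the current code as the user's authenticator app would.
print(authenticate("correct horse battery staple", pyotp.TOTP(totp_secret).now()))  # True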
Markov models are one of the widely used techniques in machine learning for processing natural language. Markov Chains and Hidden Markov Models are stochastic techniques employed for modeling dynamic systems in which the future state depends on the current state. The Markov chain, which generates a sequence of words to create a complete sentence, is frequently used in natural language generation. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, which tries to predict hidden tags based on observed words. This paper reviews the use of Markov models in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce dependence on lexicons or annotation tasks in NLP. In this paper, we have focused on Markov Models as a stochastic approach to processing NLP. A literature review was conducted to summarize research attempts, focusing on methods/techniques that use Markov Models to process NLP, their advantages, and their disadvantages. Most NLP research studies apply supervised models, improved by using Markov models to decrease the dependency on annotation tasks. Some others employed unsupervised solutions to reduce dependence on a lexicon or labeled datasets.
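For the natural language generation application mentioned above, a first-order Markov chain can be sketched in a few lines of Python; the toy corpus is purely illustrative.

import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Estimate transitions: which words follow each word in the corpus.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # sample the next word given only the current one
        out.append(word)
    return " ".join(out)

print(generate("the"))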
Web applications are becoming very important in our lives, as many sensitive processes depend on them. Therefore, their safety and resilience against malicious attacks are critical. Most studies focus on ways to detect these attacks individually. In this study, we develop a new vulnerability system to detect and prevent vulnerabilities in web applications. It has multiple functions to deal with some recurring vulnerabilities. The proposed system provides detection and prevention of four types of vulnerabilities, including SQL injection, cross-site scripting attacks, remote code execution, and fingerprinting of backend technologies. We investigated how each type of vulnerability works, then the process of detecting each type, and finally provided prevention measures for each type. This achieved three goals: reduced testing costs, increased efficiency, and improved safety. The proposed system has been validated through a practical application on a website, and experimental results demonstrate its effectiveness in detecting and preventing security threats. Our study contributes to the field of security by presenting an innovative approach to addressing security concerns, and our results highlight the importance of implementing advanced detection and prevention methods to protect against potential cyberattacks. The significance and research value of this work lie in its potential to enhance the security of online systems and reduce the risk of data breaches.
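As a highly simplified illustration of signature-based detection for two of the vulnerability classes mentioned above (not the proposed system itself), one might match request parameters against known attack patterns; the regular expressions below are assumptions chosen for demonstration and are far less thorough than a real scanner.

import re

RULES = {
    "sql_injection": re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE),
    "xss": re.compile(r"(<script\b|onerror\s*=|javascript:)", re.IGNORECASE),
}

def scan(parameter_value: str) -> list[str]:
    """Return the names of rules triggered by a request parameter."""
    return [name for name, pattern in RULES.items() if pattern.search(parameter_value)]

print(scan("1' OR 1=1 --"))                 # ['sql_injection']
print(scan("<script>alert(1)</script>"))    # ['xss']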
Process Mining (PM) and PM tool abilities play a significant role in meeting the needs of organizations in terms of getting benefits from their processes and event data, especially in this digital era. The success of PM initiatives in producing effective and efficient outputs and outcomes that organizations desire is largely dependent on the capabilities of the PM tools. This importance of the tools makes the selection of them for a specific context critical. In the selection process of appropriate tools, a comparison of them can lead organizations to an effective result. In order to meet this need and to give insight to both practitioners and researchers, in our study, we systematically reviewed the literature and elicited the papers that compare PM tools, yielding comprehensive results through a comparison of available PM tools. It specifically delivers tools’ comparison frequency, methods and criteria used to compare them, strengths and weaknesses of the compared tools for the selection of appropriate PM tools, and findings related to the identified papers' trends and demographics. Although some articles conduct a comparison for the PM tools, there is a lack of literature reviews on the studies that compare PM tools in the market. As far as we know, this paper presents the first example of a review in literature in this regard.
This scientific article presents the results of a study focused on the current practices and future prospects of AI-tools usage, specifically large language models (LLMs), in software development (SD) processes within European IT companies. The Pan-European study covers 35 SD teams from all regions of Europe and consists of three sections: the first section explores the current adoption of AI-tools in software production, the second section addresses common challenges in LLMs implementation, and the third section provides a forecast of the tech future in AI-tools development for SD.
The study reveals that AI-tools, particularly LLMs, have gained popularity and approbation in European IT companies for tasks related to software design and construction, coding, and software documentation. However, their usage for business and system analysis remains limited. Nevertheless, challenges such as resource constraints and organizational resistance are evident.
The article also highlights the potential of AI-tools in the software development process, such as automating routine operations, speeding up work processes, and enhancing software product excellence. Moreover, the research examines the transformation of IT paradigms driven by AI-tools, leading to changes in the skill sets of software developers. Although the impact of LLMs on the software development industry is perceived as modest, experts anticipate significant changes in the next 10 years, including AI-tools integration into advanced IDEs, software project management systems, and product management tools.
Ethical concerns about data ownership, information security and legal aspects of AI-tools usage are also discussed, with experts emphasizing the need for legal formalization and regulation in the AI domain. Overall, the study highlights the growing importance and potential of AI-tools in software development, as well as the need for careful consideration of challenges and ethical implications to fully leverage their benefits.